
    Pushing the glass transition towards random close packing using self-propelled hard spheres

    Although the concept of random close packing, with an almost universal packing fraction of ~0.64 for hard spheres, was introduced more than half a century ago, it is still the subject of ongoing debate. The main difficulty in searching for the densest packing is that states with packing fractions beyond the glass transition at ~0.58 are inherently non-equilibrium systems, in which the dynamics slows down and the structural relaxation time diverges with density; hence, random close packing is inaccessible. Here we perform simulations of self-propelled hard spheres and find that, with increasing activity, the relaxation dynamics can be sped up by orders of magnitude. The glass transition shifts to higher packing fractions upon increasing the activity, allowing the study of sphere packings with fluid-like dynamics at packing fractions close to random close packing. Our study opens new possibilities for investigating dense packings and the glass transition in systems of hard particles.
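    For intuition only, here is a minimal 2D sketch (hard disks rather than spheres, for brevity) of one common way to simulate self-propelled hard particles: Monte Carlo trial moves biased along a slowly rotating self-propulsion direction, rejected whenever they create an overlap. The particle number, box size, step size, and activity strength below are illustrative assumptions, not the parameters or the actual simulation scheme used in the study.

    import numpy as np

    # Minimal 2D sketch: activity-biased Monte Carlo for hard disks in a periodic box.
    # All parameters below are illustrative only.
    rng = np.random.default_rng(0)
    N, L, sigma = 64, 10.0, 1.0            # particles, box length, particle diameter
    step, activity = 0.05, 5.0             # trial-move scale and self-propulsion strength

    # Start on a square lattice so there are no initial overlaps.
    g = int(np.ceil(np.sqrt(N)))
    pos = np.array([((k % g) + 0.5, (k // g) + 0.5) for k in range(N)]) * (L / g)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)   # self-propulsion directions

    def overlaps(i, trial):
        """True if particle i placed at `trial` overlaps any other particle."""
        d = pos - trial
        d -= L * np.round(d / L)               # minimum-image convention
        r2 = np.einsum('ij,ij->i', d, d)
        r2[i] = np.inf                         # ignore self
        return bool(np.any(r2 < sigma ** 2))

    def mc_sweep():
        """One sweep of biased trial moves; overlapping moves are rejected (hard core)."""
        global theta
        for i in rng.permutation(N):
            e = np.array([np.cos(theta[i]), np.sin(theta[i])])
            trial = (pos[i] + step * (activity * e + rng.normal(size=2))) % L
            if not overlaps(i, trial):
                pos[i] = trial
        theta += 0.1 * rng.normal(size=N)      # slow rotational diffusion of the propulsion axis

    for _ in range(10):
        mc_sweep()

    Increasing the activity parameter biases accepted moves along the propulsion direction; this is the knob the abstract refers to when it says the relaxation dynamics speeds up with activity.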

    On the Round Complexity of Randomized Byzantine Agreement

    We prove lower bounds on the round complexity of randomized Byzantine agreement (BA) protocols, bounding the halting probability of such protocols after one and two rounds. In particular, we prove that: 1) BA protocols resilient against n/3 [resp., n/4] corruptions terminate (under attack) at the end of the first round with probability at most o(1) [resp., 1/2 + o(1)]. 2) BA protocols resilient against n/4 corruptions terminate at the end of the second round with probability at most 1 - Theta(1). 3) For a large class of protocols (including all BA protocols used in practice) and under a plausible combinatorial conjecture, BA protocols resilient against n/3 [resp., n/4] corruptions terminate at the end of the second round with probability at most o(1) [resp., 1/2 + o(1)]. The above bounds hold even when the parties use a trusted setup phase, e.g., a public-key infrastructure (PKI). The third bound essentially matches the recent protocol of Micali (ITCS '17) that tolerates up to n/3 corruptions and terminates at the end of the third round with constant probability.

    Two Sides of the Coin Problem

    In the coin problem, one is given n independent flips of a coin that has bias b > 0 towards either Head or Tail. The goal is to decide which side the coin is biased towards, with high confidence. An optimal strategy for solving the coin problem is to apply the majority function to the n samples. This simple strategy works as long as b > c/sqrt(n) for some constant c. However, computing majority is an impossible task for several natural computational models, such as bounded-width read-once branching programs and AC^0 circuits. Brody and Verbin proved that a length-n, width-w read-once branching program cannot solve the coin problem for b < O(1/(log n)^w). This result was tightened by Steinberger to O(1/(log n)^(w-2)). The coin problem in the model of AC^0 circuits was first studied by Shaltiel and Viola, and later by Aaronson, who proved that a depth-d, size-s Boolean circuit cannot solve the coin problem for b < O(1/(log s)^(d+2)). This work has two contributions: 1. We strengthen Steinberger's result and show that any Santha-Vazirani source with bias b < O(1/(log n)^(w-2)) fools length-n, width-w read-once branching programs. In other words, the strong independence assumption in the coin problem is completely redundant in the model of read-once branching programs, assuming the bias remains small. That is, the exact same result holds for a much more general class of sources. 2. We tighten Aaronson's result and show that a depth-d, size-s Boolean circuit cannot solve the coin problem for b < O(1/(log s)^(d-1)). Moreover, our proof technique is different, and we believe it is simpler and more natural.
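    As a quick illustration of the majority strategy discussed above (not code from the paper), the following Python snippet estimates its empirical error rate: with bias well above 1/sqrt(n) the majority vote is almost always correct, while far below that threshold it is barely better than guessing.

    import random

    def majority_guess(flips):
        """Guess the biased side by majority vote: Heads (1) if more than half the flips are 1."""
        return 1 if 2 * sum(flips) > len(flips) else 0

    def error_rate(n, b, trials=2000, seed=1):
        """Empirical error of the majority strategy for a coin with bias b towards Heads."""
        rng = random.Random(seed)
        p = 0.5 + b
        wrong = sum(
            majority_guess([1 if rng.random() < p else 0 for _ in range(n)]) != 1
            for _ in range(trials)
        )
        return wrong / trials

    # Bias well above 1/sqrt(n) ~ 0.03: majority is almost always right.
    print(error_rate(n=1000, b=0.1))     # ~0.00
    # Bias far below 1/sqrt(n): majority is barely better than a random guess.
    print(error_rate(n=1000, b=0.001))   # ~0.48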

    Formal Proofs of Tarjan's Strongly Connected Components Algorithm in Why3, Coq and Isabelle

    Comparing provers on a formalization of the same problem is always a valuable exercise. In this paper, we present the formal proof of correctness of a non-trivial algorithm from graph theory that was carried out in three proof assistants: Why3, Coq, and Isabelle.
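    For readers who want the algorithm itself at hand, here is a plain (unverified) Python rendering of Tarjan's strongly connected components algorithm; the machine-checked proofs discussed in the paper are of course carried out in Why3, Coq, and Isabelle, not in Python.

    def tarjan_scc(graph):
        """Tarjan's strongly connected components.
        `graph` maps each vertex to a list of successors; returns a list of SCCs."""
        index = {}          # discovery index of each vertex
        lowlink = {}        # smallest index reachable from the vertex's DFS subtree
        on_stack = set()
        stack, sccs = [], []
        counter = [0]

        def strongconnect(v):
            index[v] = lowlink[v] = counter[0]
            counter[0] += 1
            stack.append(v)
            on_stack.add(v)
            for w in graph.get(v, []):
                if w not in index:
                    strongconnect(w)
                    lowlink[v] = min(lowlink[v], lowlink[w])
                elif w in on_stack:
                    lowlink[v] = min(lowlink[v], index[w])
            if lowlink[v] == index[v]:          # v is the root of an SCC
                component = []
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    component.append(w)
                    if w == v:
                        break
                sccs.append(component)

        for v in graph:
            if v not in index:
                strongconnect(v)
        return sccs

    # Example: two components, {a, b, c} and {d}.
    print(tarjan_scc({'a': ['b'], 'b': ['c'], 'c': ['a', 'd'], 'd': []}))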

    Breaking the O(√n)-Bit Barrier: Byzantine Agreement with Polylog Bits Per Party

    Byzantine agreement (BA), the task of n parties to agree on one of their input bits in the face of malicious agents, is a powerful primitive that lies at the core of a vast range of distributed protocols. Interestingly, in protocols with the best overall communication, the demands on the parties are highly unbalanced: the amortized cost is Õ(1) bits per party, but some parties must send Ω(n) bits. In the best known balanced protocols, the overall communication is sub-optimal, with each party communicating Õ(√n) bits. In this work, we ask whether this asymmetry is inherent for optimizing total communication. Our contributions in this line are as follows: 1) We define a cryptographic primitive, succinctly reconstructed distributed signatures (SRDS), that suffices for constructing BA with Õ(1) balanced communication. We provide two constructions of SRDS from different cryptographic and Public-Key Infrastructure (PKI) assumptions. 2) The SRDS-based BA follows a paradigm of boosting from "almost-everywhere" agreement to full agreement, and does so in a single round. We prove that a PKI setup and cryptographic assumptions are necessary for such protocols in which every party sends o(n) messages. 3) We further explore connections between a natural approach toward attaining SRDS and average-case succinct non-interactive argument systems (SNARGs) for a particular type of NP-complete problems (generalizing Subset-Sum and Subset-Product). Our results provide new approaches forward, as well as limitations and barriers, towards minimizing the per-party communication of BA. In particular, we construct the first two BA protocols with Õ(1) balanced communication, offering a tradeoff between setup and cryptographic assumptions, and answering an open question presented by King and Saia (DISC '09).

    Round-Preserving Parallel Composition of Probabilistic-Termination Protocols

    An important benchmark for multi-party computation (MPC) protocols is their round complexity. For several important MPC tasks, (tight) lower bounds on the round complexity are known. However, for some of these tasks, such as broadcast, the lower bounds can be circumvented when the termination round of every party is not a priori known and simultaneous termination is not guaranteed. Protocols with this property are called probabilistic-termination (PT) protocols. Running PT protocols in parallel affects the round complexity of the resulting protocol in somewhat unexpected ways. For instance, an execution of m protocols with constant expected round complexity might take O(log m) rounds to complete. In a seminal work, Ben-Or and El-Yaniv (Distributed Computing '03) developed a technique for parallel execution of arbitrarily many broadcast protocols while preserving expected round complexity. More recently, Cohen et al. (CRYPTO '16) devised a framework for universal composition of PT protocols, and provided the first composable parallel-broadcast protocol with a simulation-based proof. These constructions crucially rely on the fact that broadcast is "privacy free," and do not generalize to arbitrary protocols in a straightforward way. This raises the question of whether it is possible to execute arbitrary PT protocols in parallel without increasing the round complexity. In this paper we tackle this question and provide both feasibility and infeasibility results. We construct a round-preserving protocol compiler, secure against a dishonest minority of actively corrupted parties, that compiles arbitrary protocols into a protocol realizing their parallel composition, while having only black-box access to the underlying protocols. Furthermore, we prove that the same cannot be achieved, using known techniques, given only black-box access to the functionalities realized by the protocols, unless merely security against semi-honest corruptions is required, in which case we provide such a protocol.
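    The O(log m) blow-up mentioned above can be seen with a toy model (an illustration, not a construction from the paper): if each of m independent protocols terminates in any given round with some constant probability, the round in which the slowest one finishes has expectation growing roughly like log m rather than staying constant.

    import random
    from statistics import mean

    rng = random.Random(7)

    def rounds_until_done(p=0.5):
        """Rounds until one probabilistic-termination protocol halts, modeled as
        terminating independently with probability p in each round (geometric)."""
        r = 1
        while rng.random() > p:
            r += 1
        return r

    def parallel_rounds(m, p=0.5):
        """Naive parallel run of m such protocols: it finishes only when the slowest one does."""
        return max(rounds_until_done(p) for _ in range(m))

    for m in (1, 10, 100, 1000):
        est = mean(parallel_rounds(m) for _ in range(2000))
        print(m, round(est, 2))   # grows roughly like log2(m), not a constant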

    Asynchronous Secure Multiparty Computation in Constant Time

    In the setting of secure multiparty computation, a set of mutually distrusting parties wish to securely compute a joint function. It is well known that if the communication model is asynchronous, meaning that messages can be arbitrarily delayed by an unbounded (yet finite) amount of time, secure computation is feasible if and only if at least two-thirds of the parties are honest, as was shown by Ben-Or, Canetti, and Goldreich [STOC '93] and by Ben-Or, Kelmer, and Rabin [PODC '94]. The running time of all currently known protocols depends on the function to evaluate. In this work we present the first asynchronous MPC protocol that runs in constant time. Our starting point is the asynchronous MPC protocol of Hirt, Nielsen, and Przydatek [Eurocrypt '05, ICALP '08]. We integrate threshold fully homomorphic encryption in order to reduce the interaction between the parties, thus completely removing the need for the expensive king-slaves approach taken by Hirt et al. Initially, assuming an honest majority, we construct a constant-time protocol in the asynchronous Byzantine agreement (ABA) hybrid model. Using a concurrent ABA protocol that runs in constant expected time, we obtain a constant expected-time asynchronous MPC protocol, secure against static malicious adversaries, assuming t < n/3.